FusionPillars: A 3D Object Detection Network with Cross-Fusion and Self-Fusion

Authors

Abstract

In the field of unmanned systems, cameras and LiDAR are important sensors that provide complementary information. However, the question of how to effectively fuse data from these two different modalities has always been a great challenge. In this paper, inspired by the idea of deep fusion, we propose a one-stage, end-to-end network named FusionPillars to fuse multisensor data (namely, point clouds and camera images). It includes three branches: a point-based branch, a voxel-based branch, and an image-based branch. We design two modules to enhance the voxel-wise features in the pseudo-image: the Set Abstraction Self (SAS) fusion module and the Pseudo View Cross (PVC) fusion module. For the data from a single sensor, considering the relationship between the point-wise and voxel-wise features, the SAS fusion module self-fuses the point-based branch and the voxel-based branch to enhance the spatial information of the pseudo-image. For the data from two sensors, through the transformation of the images' view, the PVC fusion module introduces the RGB images as auxiliary information and cross-fuses the pseudo-image and RGB images of different scales to supplement the color information of the pseudo-image. Experimental results revealed that, compared with existing networks, FusionPillars yields superior performance, with a considerable improvement in the detection precision for small objects.
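
As a rough illustration of the two fusion ideas named in the abstract, the sketch below shows, in PyTorch-style code, how a pillar-based pseudo-image might be self-fused with point-wise features (SAS) and cross-fused with camera features (PVC). The tensor shapes, layer choices, scatter step, and gating are assumptions for illustration only, not the authors' released implementation.

# Hypothetical sketch of the two fusion blocks described in the abstract.
# Shapes, layers, and the scatter/gating steps are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn


class SASFusion(nn.Module):
    """Self-fusion: inject point-wise features into pillar (voxel-wise) features."""

    def __init__(self, point_dim: int, pillar_dim: int):
        super().__init__()
        self.proj = nn.Linear(point_dim, pillar_dim)

    def forward(self, pillar_feats, point_feats, point_to_pillar):
        # pillar_feats:    (P, Cp) features of the non-empty pillars
        # point_feats:     (N, Cn) point-wise features from a set-abstraction branch
        # point_to_pillar: (N,)    index of the pillar each point falls into
        fused = torch.zeros_like(pillar_feats)
        fused.index_add_(0, point_to_pillar, self.proj(point_feats))
        counts = torch.bincount(point_to_pillar, minlength=pillar_feats.size(0)).clamp(min=1)
        fused = fused / counts.unsqueeze(1).to(fused.dtype)  # mean-pool points per pillar
        return pillar_feats + fused                          # residual self-fusion


class PVCFusion(nn.Module):
    """Cross-fusion: enrich the pseudo-image with RGB features at one scale."""

    def __init__(self, bev_dim: int, rgb_dim: int):
        super().__init__()
        self.rgb_proj = nn.Conv2d(rgb_dim, bev_dim, kernel_size=1)
        self.gate = nn.Conv2d(2 * bev_dim, bev_dim, kernel_size=1)

    def forward(self, bev, rgb_in_bev):
        # bev:        (B, Cb, H, W) pseudo-image from the voxel branch
        # rgb_in_bev: (B, Cr, H, W) camera features assumed already warped to the BEV grid
        rgb = self.rgb_proj(rgb_in_bev)
        attn = torch.sigmoid(self.gate(torch.cat([bev, rgb], dim=1)))
        return bev + attn * rgb                              # gated cross-fusion


if __name__ == "__main__":
    sas = SASFusion(point_dim=64, pillar_dim=64)
    pvc = PVCFusion(bev_dim=64, rgb_dim=32)
    pillars, points = torch.randn(100, 64), torch.randn(500, 64)
    idx = torch.randint(0, 100, (500,))
    bev, rgb = torch.randn(2, 64, 128, 128), torch.randn(2, 32, 128, 128)
    print(sas(pillars, points, idx).shape, pvc(bev, rgb).shape)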

Similar Articles

Self-Organizing, Adaptive Data Fusion for 3d Object Tracking

Data fusion concepts are a necessary basis for utilizing complex networks of sensors. A key feature for a robust data fusion system is adaptivity, both to be fault-tolerant and to run in a self-organizing manner. In this contribution a general framework for adaptive data fusion is established with object tracking as an application. The fusion algorithm of Democratic Integration is presented as ...

Neural network based 2D/3D fusion for robotic object recognition

We present a neural network based fusion approach for realtime robotic object recognition which integrates 2D and 3D descriptors in a flexible way. The presented recognition architecture is coupled to a real-time segmentation step based on 3D data, since a focus of our investigations is real-world operation on a mobile robot. As recognition must operate on imperfect segmentation results, we con...

Application of Combined Local Object Based Features and Cluster Fusion for the Behaviors Recognition and Detection of Abnormal Behaviors

In this paper, we propose a novel framework for behaviors recognition and detection of certain types of abnormal behaviors, capable of achieving high detection rates on a variety of real-life scenes. The new proposed approach here is a combination of the location based methods and the object based ones. First, a novel approach is formulated to use optical flow and binary motion video as the loc...

Silhouettes Fusion for 3D Shapes Modeling with Ghost Object Removal

In this paper, we investigate a practical framework to compute a 3D shape estimation of multiple objects in real-time from silhouette probability maps in multi-view environments. A popular method called Shape From Silhouette (SFS), computes a 3D shape estimation from binary silhouette masks. This method has several limitations: The acquisition space is limited to the intersection of the camera ...
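
For reference, the classic binary Shape From Silhouette baseline that this abstract mentions can be sketched as voxel carving: a voxel survives only if it projects inside every silhouette. The probabilistic silhouette maps and ghost-object removal proposed in that paper are not reproduced here, and the camera model below (3x4 projection matrices, a flat list of voxel centres) is an illustrative assumption.

# Minimal sketch of classic binary Shape From Silhouette (visual-hull carving).
import numpy as np


def shape_from_silhouette(silhouettes, projections, grid_points):
    """Keep a voxel only if it projects inside every binary silhouette mask.

    silhouettes: list of (H, W) boolean masks, one per calibrated camera
    projections: list of (3, 4) projection matrices P = K [R | t]
    grid_points: (N, 3) voxel centres in world coordinates
    returns:     (N,) boolean occupancy (the visual hull)
    """
    homog = np.hstack([grid_points, np.ones((grid_points.shape[0], 1))])  # (N, 4)
    occupied = np.ones(grid_points.shape[0], dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                      # (N, 3) homogeneous image points
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (uvw[:, 2] > 0)
        hit = np.zeros_like(occupied)
        hit[inside] = mask[v[inside].astype(int), u[inside].astype(int)]
        occupied &= hit                        # intersection over all views
    return occupied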

Adaptive Decision Fusion in Detection Networks

In a detection network, the final decision is made by fusing the decisions from local detectors. The objective is to minimize the final error probability. To implement an optimal fusion rule, the performance of each detector, i.e., its probability of false alarm and its probability of missed detection, as well as the a priori probabilities of the hypotheses, must be known. How...
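
The requirement stated above corresponds to the standard likelihood-ratio fusion rule for binary local decisions (often attributed to Chair and Varshney), which weights each local decision by its detector's false-alarm and missed-detection probabilities and adds the log prior ratio. The sketch below illustrates that baseline rule only; the adaptive scheme developed in the paper, which avoids assuming these quantities are known, is not reproduced.

# Sketch of the likelihood-ratio fusion rule for binary local decisions, using
# exactly the quantities the abstract lists: per-detector false-alarm and
# missed-detection probabilities plus the prior of the "target present" hypothesis.
import math


def fuse_decisions(u, p_fa, p_md, p1_prior):
    """Fuse local decisions u_i in {0, 1} into a global decision (1 or 0)."""
    llr = math.log(p1_prior / (1.0 - p1_prior))
    for ui, pf, pm in zip(u, p_fa, p_md):
        if ui == 1:
            llr += math.log((1.0 - pm) / pf)   # detector said "present"
        else:
            llr += math.log(pm / (1.0 - pf))   # detector said "absent"
    return 1 if llr > 0 else 0


# Example: three detectors of different quality, two of which fire.
print(fuse_decisions(u=[1, 0, 1], p_fa=[0.05, 0.2, 0.1], p_md=[0.1, 0.3, 0.15], p1_prior=0.5))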

Journal

Journal title: Remote Sensing

Year: 2023

ISSN: 2315-4632, 2315-4675

DOI: https://doi.org/10.3390/rs15102692